Black-Box AI Systems


The Impact of Transparency in AI Systems on Users' Data-Sharing Intentions: A Scenario-Based Experiment

Rosenberger, Julian, Kuhlemann, Sophie, Tiefenbeck, Verena, Kraus, Mathias, Zschech, Patrick

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) systems are frequently employed in online services to provide personalized experiences to users based on large collections of data. However, AI systems can be designed in different ways, with black-box AI systems appearing as complex data-processing engines and white-box AI systems appearing as fully transparent data processors. As such, it is reasonable to assume that these different design choices also affect user perception and thus users' willingness to share data. To this end, we conducted a pre-registered, scenario-based online experiment with 240 participants and investigated how transparent and non-transparent data-processing entities influence data-sharing intentions. Surprisingly, our results revealed no significant difference in willingness to share data across entities, challenging the notion that transparency increases data-sharing willingness. Furthermore, we found that a general attitude of trust towards AI has a significant positive influence, especially in the transparent AI condition, whereas privacy concerns did not significantly affect data-sharing decisions.


Interview with Pulkit Verma: Towards safe and reliable behavior of AI agents

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop, together with a panel of established researchers. In this latest interview, we hear from Pulkit Verma, a recent PhD graduate from Arizona State University. I recently completed my PhD in Computer Science at the School of Computing and Augmented Intelligence, Arizona State University. My research focuses on safe and reliable behavior of AI agents.


AI models need to be 'interpretable' rather than just 'explainable'

#artificialintelligence

Last November, Apple ran into trouble after customers pointed out on Twitter that its credit card service was discriminating against women. David Heinemeier Hansson, the creator of Ruby on Rails, called Apple Card a sexist program. "Apple's black box algorithm thinks I deserve 20x the credit limit [my wife] does," he tweeted. The success of deep learning in the past decade has increased interest in the field of artificial intelligence. But the rising popularity of AI has also highlighted some of the key problems of the field, including the "black box problem": the challenge of making sense of the way complex machine learning algorithms make decisions.


The dangers of trusting black-box machine learning

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Last November, Apple ran into trouble after customers pointed out on Twitter that its credit card service was discriminating against women. David Heinemeier Hansson, the creator of Ruby on Rails, called Apple Card a sexist program. "Apple's black box algorithm thinks I deserve 20x the credit limit [my wife] does," he tweeted. The @AppleCard is such a fucking sexist program.


Artificial Intelligence – A Counterintelligence Perspective: Part IV

#artificialintelligence

In my first post in this series, I wrote that one definition of artificial intelligence (AI) is a machine that thinks. Several people with technical backgrounds in the AI field reached out to me after reading that post. One comment I received that I found striking is that AI is neither artificial nor intelligent; instead, it is just computer code. Nothing is thinking; a computer is simply following directions, mapping inputs to outputs in pursuit of a goal.